Goto

Collaborating Authors

 right lane


A novel framework for adaptive stress testing of autonomous vehicles in multi-lane roads

Trinh, Linh, Luu, Quang-Hung, Nguyen, Thai M., Vu, Hai L.

arXiv.org Artificial Intelligence

Stress testing is an approach for evaluating the reliability of systems under extreme conditions which help reveal vulnerable scenarios that standard testing may overlook. Identifying such scenarios is of great importance in autonomous vehicles (AV) and other safety-critical systems. Since failure events are rare, naive random search approaches require a large number of vehicle operation hours to identify potential system failures. Adaptive Stress Testing (AST) is a method addressing this constraint by effectively exploring the failure trajectories of AV using a Markov decision process and employs reinforcement learning techniques to identify driving scenarios with high probability of failures. However, existing AST frameworks are able to handle only simple scenarios, such as one vehicle moving longitudinally on a single lane road which is not realistic and has a limited applicability. In this paper, we propose a novel AST framework to systematically explore corner cases of intelligent driving models that can result in safety concerns involving both longitudinal and lateral vehicle's movements. Specially, we develop a new reward function for Deep Reinforcement Learning to guide the AST in identifying crash scenarios based on the collision probability estimate between the AV under test (i.e., the ego vehicle) and the trajectory of other vehicles on the multi-lane roads. To demonstrate the effectiveness of our framework, we tested it with a complex driving model vehicle that can be controlled in both longitudinal and lateral directions. Quantitative and qualitative analyses of our experimental results demonstrate that our framework outperforms the state-of-the-art AST scheme in identifying corner cases with complex driving maneuvers.


Exploring Backdoor Attacks against Large Language Model-based Decision Making

Jiao, Ruochen, Xie, Shaoyuan, Yue, Justin, Sato, Takami, Wang, Lixu, Wang, Yixuan, Chen, Qi Alfred, Zhu, Qi

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have shown significant promise in decision-making tasks when fine-tuned on specific applications, leveraging their inherent common sense and reasoning abilities learned from vast amounts of data. However, these systems are exposed to substantial safety and security risks during the fine-tuning phase. In this work, we propose the first comprehensive framework for Backdoor Attacks against LLM-enabled Decision-making systems (BALD), systematically exploring how such attacks can be introduced during the fine-tuning phase across various channels. Specifically, we propose three attack mechanisms and corresponding backdoor optimization methods to attack different components in the LLM-based decision-making pipeline: word injection, scenario manipulation, and knowledge injection. Word injection embeds trigger words directly into the query prompt. Scenario manipulation occurs in the physical environment, where a high-level backdoor semantic scenario triggers the attack. Knowledge injection conducts backdoor attacks on retrieval augmented generation (RAG)-based LLM systems, strategically injecting word triggers into poisoned knowledge while ensuring the information remains factually accurate for stealthiness. We conduct extensive experiments with three popular LLMs (GPT-3.5, LLaMA2, PaLM2), using two datasets (HighwayEnv, nuScenes), and demonstrate the effectiveness and stealthiness of our backdoor triggers and mechanisms. Finally, we critically assess the strengths and weaknesses of our proposed approaches, highlight the inherent vulnerabilities of LLMs in decision-making tasks, and evaluate potential defenses to safeguard LLM-based decision making systems.


Predicting highway lane-changing maneuvers: A benchmark analysis of machine and ensemble learning algorithms

Khelfa, Basma, Ba, Ibrahima, Tordeux, Antoine

arXiv.org Artificial Intelligence

Understanding and predicting highway lane-change maneuvers is essential for driving modeling and its automation. The development of data-based lane-changing decision-making algorithms is nowadays in full expansion. We compare empirically in this article different machine and ensemble learning classification techniques to the MOBIL rule-based model using trajectory data of European two-lane highways. The analysis relies on instantaneous measurements of up to twenty-four spatial-temporal variables with the four neighboring vehicles on current and adjacent lanes. Preliminary descriptive investigations by principal component and logistic analyses allow identifying main variables intending a driver to change lanes. We predict two types of discretionary lane-change maneuvers: overtaking (from the slow to the fast lane) and fold-down (from the fast to the slow lane). The prediction accuracy is quantified using total, lane-changing and lane-keeping errors and associated receiver operating characteristic curves. The benchmark analysis includes logistic model, linear discriminant, decision tree, na\"ive Bayes classifier, support vector machine, neural network machine learning algorithms, and up to ten bagging and stacking ensemble learning meta-heuristics. If the rule-based model provides limited predicting accuracy, the data-based algorithms, devoid of modeling bias, allow significant prediction improvements. Cross validations show that selected neural networks and stacking algorithms allow predicting from a single observation both fold-down and overtaking maneuvers up to four seconds in advance with high accuracy.


Those Infuriating Drivers That Take Over The Left Lane And Prevent Passing Will Undoubtedly Be Stifling For AI Self-Driving Cars

#artificialintelligence

Difficulties in left lane usage are common and exasperating. I'm referring to those darned drivers that sit in the left lane nearly forever, cruising leisurely along without a seeming care in the world, backing up traffic as they do so. You've undoubtedly been stuck behind such a driver. It is exasperating, infuriating, and altogether makes you want to bust a gasket. They get into the left lane and occupy the lane as though it is owned by them. On top of this, they decide to be the unofficial determiner of the allowed speed for the rest of nearby traffic. For example, even though the posted speed limit might be 65 miles per hour, the left lane hog will opt to go at say 55 miles per hour. There are lots of frequently cited reasons or excuses for this type of behavior. One claim is that they are going at the safest appropriate speed. This is based on the logic that the posted speed is the maximum allowed speed, which is not necessarily the safest allowed speed. Indeed, the driver's handbook clearly states that you should never assume that the posted speed is the speed that you are to be driving at.


Risk-Aware Lane Selection on Highway with Dynamic Obstacles

Bae, Sangjae, Isele, David, Fujimura, Kikuo, Moura, Scott J.

arXiv.org Artificial Intelligence

This paper proposes a discretionary lane selection algorithm. In particular, highway driving is considered as a targeted scenario, where each lane has a different level of traffic flow. When lane-changing is discretionary, it is advised not to change lanes unless highly beneficial, e.g., reducing travel time significantly or securing higher safety. Evaluating such "benefit" is a challenge, along with multiple surrounding vehicles in dynamic speed and heading with uncertainty. We propose a real-time lane-selection algorithm with careful cost considerations and with modularity in design. The algorithm is a search-based optimization method that evaluates uncertain dynamic positions of other vehicles under a continuous time and space domain. For demonstration, we incorporate a state-of-the-art motion planner framework (Neural Networks integrated Model Predictive Control) under a CARLA simulation environment.


GPS system upgrade utilizes AI to make sure you're in the right lane

#artificialintelligence

In-car satnav systems and mobile mapping apps have made it much easier to travel from one place to another without getting lost, but a new innovation promises to help fix a remaining pain point – getting in the right lane at intersections. Today's mapping apps aren't always much help if you're at an unfamiliar intersection and aren't sure exactly where on the road your car is supposed to be: the apps often don't have the detail or the knowledge to warn you in good time about changing lanes. The system developed by researchers at MIT and the Qatar Computing Research Institute uses satellite imagery to augment existing mapping data, but the smart part is applying artificial intelligence to work out the layout of roads hidden by trees and buildings. It's called RoadTagger, and by deploying machine learning on satellite imagery, the system is able to figure out with a high degree of accuracy some extra details on roads – including, for example, how many lanes they have. That could give drivers an early warning about diverging or merging lanes.


Tesla's Navigate on Autopilot was my CES road trip companion

Engadget

I love a good road trip. I've spent hundreds of thousands of miles in cars during my life, and the best times were when I knew it would be hours or even days before I reached my destination. Typically a friend (or friends) or family members would accompany me, but on a few occasions, it was just me, my music collection -- and scenery screaming past me at 70 miles per hour. In the past few years, more and more automakers have created semiautonomous systems so that you're no longer "alone" on these drives. One of the more robust (and most famous) is Tesla's Autopilot.


Composable Action-Conditioned Predictors: Flexible Off-Policy Learning for Robot Navigation

Kahn, Gregory, Villaflor, Adam, Abbeel, Pieter, Levine, Sergey

arXiv.org Artificial Intelligence

A general-purpose intelligent robot must be able to learn autonomously and be able to accomplish multiple tasks in order to be deployed in the real world. However, standard reinforcement learning approaches learn separate task-specific policies and assume the reward function for each task is known a priori. We propose a framework that learns event cues from off-policy data, and can flexibly combine these event cues at test time to accomplish different tasks. These event cue labels are not assumed to be known a priori, but are instead labeled using learned models, such as computer vision detectors, and then `backed up' in time using an action-conditioned predictive model. We show that a simulated robotic car and a real-world RC car can gather data and train fully autonomously without any human-provided labels beyond those needed to train the detectors, and then at test-time be able to accomplish a variety of different tasks. Videos of the experiments and code can be found at https://github.com/gkahn13/CAPs


Many of Our Beliefs Are Unconscious: A Response to Nick Chater - Facts So Romantic

Nautilus

Nick Chater has put forward a bold claim in his recent book, The Mind Is Flat, as well as in an article and interview in Nautilus: that we don't have any unconscious thoughts. A metaphor that Chater, a behavioral scientist, dislikes is that of the iceberg, the tip of which is our consciousness, and the vast, submerged part is our unconscious. As Chater says in the Nautilus interview, this suggests that unconscious and conscious processes use the same kinds of representations, and that the kinds of things we are unconscious of we could be conscious of. He's certainly right that many brain processes go on that we're unaware of, and can't be aware of. Let's take visual recognition as an example.


Tesla Model 3 on Autopilot avoids crash in near-miss caught on dashcam

#artificialintelligence

Accidents involving Tesla vehicles on Autopilot often get reported in the media, but we don't hear a lot about the accidents that didn't happen because of Autopilot since it's not as exciting when virtually nothing happened – though it's arguably just as important. Now we have a good example with a Tesla Model 3 on Autopilot avoiding a crash in near-miss caught on a dashcam. Tesla's Autopilot technology comes with a suite of crash avoidance features including side collision avoidance, which can alert the driver of a collision risk and even brake and steer away from a crash if it believes it to be safe to do so. That's exactly what a Model 3 owner believes happened in a near-miss caught on camera. "Close call while cruising on the highway along with traffic when an idiot who was speeding and cutting everyone off almost sideswiped us with kid inside. Autopilot was engaged and started to brake and moved us to the right lane to avoid a collision. I guess it detected no vehicles on the right of us and I took over and powered out to steer us back into the original lane in front of that idiot. Be safe out there and always be alert even with Autopilot engaged and watch out for idiot drivers."